
    Dependence of ground state energy of classical n-vector spins on n

    We study the ground state energy E_G(n) of N classical n-vector spins with the Hamiltonian H = -\sum_{i>j} J_{ij} \mathbf{S}_i \cdot \mathbf{S}_j, where \mathbf{S}_i and \mathbf{S}_j are n-vectors and the coupling constants J_{ij} are arbitrary. We prove that E_G(n) is independent of n for all n > n_{max}(N) = \lfloor (\sqrt{8N+1}-1)/2 \rfloor, and we show that this bound is the best possible. We also derive an upper bound on E_G(m) in terms of E_G(n), for m < n. We obtain an upper bound on the frustration in the system, as measured by F(n) = (\sum_{i>j} |J_{ij}| + E_G(n)) / \sum_{i>j} |J_{ij}|. Finally, we describe a procedure for constructing a set of J_{ij}'s such that an arbitrary given state {\mathbf{S}_i} is the ground state.
    Comment: 6 pages, 2 figures, submitted to Physical Review
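    As a rough illustration of the quantities quoted above, the following sketch (not from the paper) evaluates the bound n_{max}(N) and the energy of a given spin configuration; the couplings J and the random unit spins are made-up placeholders, and computing the frustration F(n) would additionally require the true ground state energy E_G(n).

```python
# Minimal sketch of the quantities in the abstract; J and the spin state are
# invented for illustration, and the random state is NOT the ground state.
import math
import numpy as np

def n_max(N):
    # Bound beyond which E_G(n) no longer depends on n, as quoted in the abstract.
    return math.floor((math.sqrt(8 * N + 1) - 1) / 2)

def energy(J, S):
    # H = - sum_{i>j} J_ij S_i . S_j for unit n-vector spins (rows of S).
    N = len(S)
    return -sum(J[i, j] * S[i] @ S[j] for i in range(N) for j in range(i))

def frustration(J, E_ground):
    # F(n) = (sum_{i>j} |J_ij| + E_G(n)) / sum_{i>j} |J_ij|; needs the true E_G(n).
    total = sum(abs(J[i, j]) for i in range(len(J)) for j in range(i))
    return (total + E_ground) / total

rng = np.random.default_rng(0)
N, n = 6, 3
J = rng.standard_normal((N, N)); J = (J + J.T) / 2        # arbitrary symmetric couplings
S = rng.standard_normal((N, n))
S /= np.linalg.norm(S, axis=1, keepdims=True)             # unit n-vector spins

print("n_max(6) =", n_max(6))              # 3, since floor((sqrt(49) - 1) / 2) = 3
print("energy of a random state:", energy(J, S))
```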

    Relative Comparison Kernel Learning with Auxiliary Kernels

    In this work we consider the problem of learning a positive semidefinite kernel matrix from relative comparisons of the form "object A is more similar to object B than it is to C", where the comparisons are provided by humans. Existing solutions to this problem assume that many comparisons are available, so that a high-quality kernel can be learned. This assumption is unrealistic for many real-world tasks: relative assessments require human input, which is often costly or difficult to obtain, so only a limited number of comparisons may be provided. We therefore explore methods for aiding the kernel learning process with auxiliary kernels built from more easily extractable information about the relationships among objects. We propose a new kernel learning approach in which the target kernel is defined as a conic combination of auxiliary kernels and a kernel whose elements are learned directly. We formulate a convex optimization problem that solves for this target kernel while adding only minor overhead to methods that use no auxiliary information. Empirical results show that, given few training relative comparisons, our method learns kernels that generalize to more out-of-sample comparisons than methods that do not use auxiliary information and than similar methods that learn metrics over objects.
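    A minimal sketch of this style of formulation, assuming a cvxpy-like convex solver is available; the auxiliary kernels, comparison triplets, hinge loss, and regularization weights below are illustrative assumptions and not the paper's exact objective.

```python
# Sketch: learn K = K0 + sum_m mu_m * K_aux[m], with mu >= 0 and K0 PSD,
# from relative comparisons (a, b, c) meaning "a is closer to b than to c".
import numpy as np
import cvxpy as cp

n = 10
rng = np.random.default_rng(0)

def random_psd(n):
    # Stand-in for an auxiliary kernel built from side information.
    A = rng.standard_normal((n, n))
    return A @ A.T / n

K_aux = [random_psd(n) for _ in range(3)]          # hypothetical auxiliary kernels
triplets = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]       # hypothetical comparisons

mu = cp.Variable(len(K_aux), nonneg=True)          # conic weights on auxiliary kernels
K0 = cp.Variable((n, n), PSD=True)                 # directly-learned PSD component
K = K0 + sum(mu[m] * K_aux[m] for m in range(len(K_aux)))

def sq_dist(K, i, j):
    # Squared distance induced by a kernel matrix (affine in K).
    return K[i, i] + K[j, j] - 2 * K[i, j]

# Hinge loss: each comparison should hold with unit margin.
loss = cp.sum(cp.hstack([
    cp.pos(1 - (sq_dist(K, a, c) - sq_dist(K, a, b))) for a, b, c in triplets
]))
reg = cp.norm(K0, "fro") + cp.norm(mu, 1)          # keep the learned parts small

cp.Problem(cp.Minimize(loss + 0.1 * reg)).solve()
print("auxiliary kernel weights:", mu.value)
```

    The key design point mirrored here is that the comparison constraints are affine in (K0, mu), so the hinge losses stay convex and the auxiliary kernels enter only through the nonnegative weights mu.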

    Rademacher chaos complexities for learning the kernel problem

    We develop a novel generalization bound for the kernel learning problem. First, we show that the generalization analysis of this problem reduces to studying the suprema of the Rademacher chaos process of order two over the candidate kernels, which we refer to as the Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity using well-established metric entropy integrals and the pseudo-dimension of the set of candidate kernels. Our methodology mainly depends on the theory of U-processes and entropy integrals. Finally, we establish satisfactory excess generalization bounds and misclassification error rates for learning Gaussian kernels and general radial basis kernels.
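    The sketch below is a crude Monte Carlo illustration of an order-two Rademacher chaos quantity over a finite family of Gaussian kernels; the exact definition and normalization used in the paper may differ, and the sample, bandwidths, and 1/n scaling here are assumptions for illustration only.

```python
# Monte Carlo estimate of sup over candidate kernels of the degree-2 Rademacher
# chaos |sum_{i<j} eps_i eps_j k(x_i, x_j)|, averaged over random sign vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))                 # hypothetical sample of 30 points

def gaussian_kernel(X, sigma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

sigmas = [0.5, 1.0, 2.0, 4.0]                    # candidate Gaussian bandwidths
kernels = [gaussian_kernel(X, s) for s in sigmas]
n = X.shape[0]

def chaos_complexity(kernels, n, draws=2000):
    iu = np.triu_indices(n, k=1)                 # pairs with i < j
    total = 0.0
    for _ in range(draws):
        eps = rng.choice([-1.0, 1.0], size=n)
        outer = np.outer(eps, eps)
        # sup over candidate kernels of the order-2 chaos sum
        sup = max(abs((outer * K)[iu].sum()) for K in kernels)
        total += sup
    return total / (draws * n)                   # 1/n normalization assumed here

print("estimated Rademacher chaos complexity:", chaos_complexity(kernels, n))
```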